
    Identity based proxy re-encryption scheme (IBPRE+) for secure cloud data sharing

In proxy re-encryption (PRE), a proxy holding re-encryption keys can transform a ciphertext computed under Alice's public key into a new one that Bob can decrypt with his own secret key. Recently, Wang et al. introduced the concept of the PRE plus (PRE+) scheme, which can be seen as the dual of PRE: it is almost the same as a PRE scheme, except that the re-encryption keys are generated by the encrypter. Compared to PRE, a PRE+ scheme can easily achieve two important properties: message-level fine-grained delegation and non-transferability. In this paper, we extend the concept of PRE+ to the identity-based setting. We propose a concrete IBPRE+ scheme based on a 3-linear map and discuss its properties. We also demonstrate a potential application of this new primitive to secure cloud data sharing.
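To make the data flow concrete, here is a minimal, deliberately insecure sketch of the PRE+ workflow in Python. The one-time-pad "encryption" and the way keys are handled are placeholders invented for illustration; a real IBPRE+ construction derives these values from a 3-linear map and identity-based keys, as the paper proposes.

```python
import os

KEY_LEN = 16  # toy scheme: fixed-length messages only

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

# Toy keys: in a real IBPRE+ scheme these are identity-based key pairs.
alice_key = os.urandom(KEY_LEN)
bob_key = os.urandom(KEY_LEN)

msg = b"0123456789abcdef"

# The encrypter produces Alice's ciphertext ...
ct_alice = xor(msg, alice_key)

# ... and, unlike classic PRE (where the *delegator* issues this),
# the *encrypter* also outputs the re-encryption key. Per-ciphertext
# re-encryption keys are what enable message-level delegation.
# (In this symmetric toy the encrypter trivially knows both keys; in the
# real scheme it derives rk from public values and its own randomness.)
rk_alice_to_bob = xor(alice_key, bob_key)

# The proxy transforms the ciphertext without ever seeing the plaintext:
ct_bob = xor(ct_alice, rk_alice_to_bob)

# Bob decrypts with his own key.
assert xor(ct_bob, bob_key) == msg
```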

    Controlled secure social cloud data sharing based on a novel identity based proxy re-encryption plus scheme

Currently we are witnessing a rapid integration of social networks and cloud computing, especially the storage of social media content on cloud storage, owing to its cheap management and easy access at any time and from any place. However, securely storing and sharing social media content such as pictures and videos among social groups remains a very challenging problem. In this paper, we tackle this problem using a new cryptographic primitive: identity-based proxy re-encryption plus (IBPRE+), a variant of proxy re-encryption (PRE). In PRE, by using re-encryption keys, a ciphertext computed for Alice can be transformed into a new one for Bob. Recently, the concept of PRE plus (PRE+) was introduced by Wang et al. In PRE+, all the algorithms are almost the same as in traditional PRE, except that the re-encryption keys are generated by the encrypter instead of the delegator. Message-level fine-grained delegation and weak non-transferability can be easily achieved by PRE+, while traditional PRE cannot achieve them. Based on a 3-linear map, we first propose a new IBE scheme and a new IBPRE+ scheme, prove the security of these schemes, and analyze the properties and performance of the new IBPRE+ scheme. Finally, we propose a framework based on this new primitive for secure cloud social data sharing.
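As a rough illustration of how message-level delegation serves social sharing, the sketch below models the cloud as a proxy that stores ciphertexts and only transforms the specific items an owner has issued a re-encryption key for. All names and the transform placeholder are hypothetical; the paper's framework would instantiate the transform with its IBPRE+ re-encryption algorithm.

```python
from typing import Callable, Dict, Tuple

# Placeholder type: a real system would plug in IBPRE+ re-encryption here.
ReEncrypt = Callable[[bytes], bytes]

class SocialCloudProxy:
    """Toy proxy: stores owners' ciphertexts and per-item grants."""

    def __init__(self) -> None:
        self.store: Dict[Tuple[str, str], bytes] = {}            # (owner, item) -> ct
        self.grants: Dict[Tuple[str, str, str], ReEncrypt] = {}  # (owner, item, friend) -> rk

    def upload(self, owner: str, item: str, ct: bytes) -> None:
        self.store[(owner, item)] = ct

    def grant(self, owner: str, item: str, friend: str, rk: ReEncrypt) -> None:
        # Message-level delegation: each grant names one specific item.
        self.grants[(owner, item, friend)] = rk

    def fetch(self, owner: str, item: str, friend: str) -> bytes:
        rk = self.grants[(owner, item, friend)]  # KeyError if never granted
        return rk(self.store[(owner, item)])

# Usage with a dummy transform standing in for re-encryption:
proxy = SocialCloudProxy()
proxy.upload("alice", "holiday.jpg", b"\x01\x02\x03")
proxy.grant("alice", "holiday.jpg", "bob", lambda ct: ct)  # placeholder rk
print(proxy.fetch("alice", "holiday.jpg", "bob"))
```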

    Dual Contrastive Network for Sequential Recommendation with User and Item-Centric Perspectives

With the rapid growth of today's streaming data, sequential recommendation is a promising approach to time-aware personalized modeling. It aims to infer the next item a given user will interact with based on their history item sequence. Some recent works try to improve sequential recommendation by randomly masking items in the history so as to generate self-supervised signals, but such an approach results in sparser item sequences and unreliable signals. Besides, existing sequential recommendation is only user-centric, i.e., it uses the chronologically ordered historical items to predict the probability of candidate items, which ignores whether the items from a provider can be successfully recommended. Such user-centric recommendation makes it impossible for providers to expose their new items and results in popularity bias. In this paper, we propose a novel Dual Contrastive Network (DCN) that generates ground-truth self-supervised signals for sequential recommendation via an auxiliary user sequence from an item-centric perspective. Specifically, we propose dual representation contrastive learning to refine representation learning by minimizing the Euclidean distance between the representation of a given user/item and the representations of its history items/users. Before the second contrastive learning module, we perform next-user prediction to capture the trends of items preferred by certain types of users and to provide personalized exploration opportunities for item providers. Finally, we propose dual interest contrastive learning to self-supervise the dynamic interest from next item/user prediction against the static interest of matching probability. Experiments on four benchmark datasets verify the effectiveness of our proposed method, and an ablation study illustrates the boosting effect of the proposed components upon different sequential models.
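A minimal sketch of the distance term the abstract describes, assuming PyTorch, mean-pooled history representations, and shifted in-batch negatives (all three are assumptions for illustration; the paper's exact architecture and negative sampling may differ).

```python
import torch
import torch.nn.functional as F

def dual_representation_loss(user_emb: torch.Tensor,       # (B, d)
                             hist_item_embs: torch.Tensor,  # (B, L, d)
                             item_emb: torch.Tensor,        # (B, d)
                             hist_user_embs: torch.Tensor,  # (B, L, d)
                             margin: float = 1.0) -> torch.Tensor:
    """Pull each user toward its history items and each item toward its
    history users; push away shifted in-batch negatives."""
    u_anchor = hist_item_embs.mean(dim=1)
    i_anchor = hist_user_embs.mean(dim=1)

    def contrast(anchor, positive):
        pos = F.pairwise_distance(anchor, positive)             # (B,)
        neg = F.pairwise_distance(anchor, positive.roll(1, 0))  # shifted negatives
        return (pos + F.relu(margin - neg)).mean()

    return contrast(u_anchor, user_emb) + contrast(i_anchor, item_emb)

# Tiny smoke test with random tensors:
B, L, d = 8, 20, 64
loss = dual_representation_loss(torch.randn(B, d), torch.randn(B, L, d),
                                torch.randn(B, d), torch.randn(B, L, d))
print(loss.item())
```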

    Foundation Model Based Native AI Framework in 6G with Cloud-Edge-End Collaboration

Future wireless communication networks are in a position to move beyond data-centric, device-oriented connectivity and offer intelligent, immersive experiences based on task-oriented connections, especially in the context of the thriving development of pre-trained foundation models (PFM) and the evolving vision of 6G native artificial intelligence (AI). Therefore, redefining modes of collaboration between devices and servers and constructing native intelligence libraries become critically important in 6G. In this paper, we analyze the challenges of achieving 6G native AI from the perspectives of data, intelligence, and networks. Then, we propose a 6G native AI framework based on foundation models, provide a customization approach for intent-aware PFM, present the construction of a task-oriented AI toolkit, and outline a novel cloud-edge-end collaboration paradigm. As a practical use case, we apply this framework to orchestration, aiming to achieve the maximum sum rate within a wireless communication system, and present preliminary evaluation results. Finally, we outline research directions for achieving native AI in 6G.
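For context on the orchestration objective, the sum rate of a multi-user system is conventionally the Shannon sum Σ_i B·log2(1 + SINR_i). The snippet below computes it for hypothetical link qualities; the paper's actual optimization variables and constraints are not specified in the abstract.

```python
import numpy as np

def sum_rate(bandwidth_hz: float, sinr_linear: np.ndarray) -> float:
    """Shannon sum rate in bit/s over all users, SINR in linear scale."""
    return float(bandwidth_hz * np.log2(1.0 + sinr_linear).sum())

# Hypothetical example: 3 users at 10 MHz with linear SINRs of 10, 3 and 1.
print(sum_rate(10e6, np.array([10.0, 3.0, 1.0])))
```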

    Impact of collimator leaf width and treatment technique on stereotactic radiosurgery and radiotherapy plans for intra- and extracranial lesions

Background: This study evaluated the dosimetric impact of various treatment techniques as well as collimator leaf width (2.5 vs 5 mm) for three groups of tumors: spine tumors, brain tumors abutting the brainstem, and liver tumors. These lesions often present challenges in maximizing dose to target volumes without exceeding critical organ tolerance. Specifically, this study evaluated the dosimetric benefits of various techniques and collimator leaf sizes as a function of lesion size and shape.

Methods: Fifteen cases (5 for each site) were studied retrospectively. All lesions either abutted or were an integral part of critical structures (brainstem, liver or spinal cord). For brain and liver lesions, treatment plans using a 3D-conformal static technique (3D), dynamic conformal arcs (DARC) or intensity modulation (IMRT) were designed on a conventional linear accelerator with a standard 5 mm leaf width multi-leaf collimator (MLC), and on a linear accelerator dedicated to radiosurgery and hypofractionated therapy with a 2.5 mm leaf width collimator. For the concave spine lesions, intensity modulation was required to provide adequate conformality; hence, only IMRT plans were evaluated, using either the standard or the small leaf-width collimator. A total of 70 treatment plans were generated, and each plan was individually optimized according to the technique employed. The generalized estimating equation (GEE) was used to separate the impact of treatment technique from that of the MLC system on plan outcome, and t-tests were performed to evaluate statistical differences in target coverage and organ sparing between plans.

Results: The lesions ranged in size from 2.6 to 12.5 cc, 17.5 to 153 cc, and 20.9 to 87.7 cc for the brain, liver, and spine groups, respectively. As a group, brain lesions were smaller than spine and liver lesions. While brain and liver lesions were primarily ellipsoidal, spine lesions were more complex in shape, as they were all concave. Therefore, the brain and liver groups were compared for volume effect, and the liver and spine groups were compared for shape. For the brain and liver groups, both the radiosurgery MLC and the IMRT technique contributed to dose sparing of organs at risk (OARs): dose in the high-dose regions of these OARs was reduced by up to 15% compared to the non-IMRT techniques employing a 5 mm leaf-width collimator. The dose reduction contributed by the fine leaf-width MLC diminished as target volume increased, with dose savings at all levels falling from 4-11% for the brain group to 1-5% for the liver group. The fine leaf-width collimator significantly improved spinal cord sparing, with dose reductions of 14-19% in high- to middle-dose regions compared to the 5 mm leaf width collimator.

Conclusion: The fine leaf-width MLC in combination with the IMRT technique can yield dosimetric benefits in radiosurgery and hypofractionated radiotherapy. Treatment of small lesions in cases involving complex target/OAR geometry will especially benefit from the use of a fine leaf-width MLC and IMRT.
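As an aside on the statistics, a paired t-test across matched plans is the standard way to compare, e.g., an OAR dose metric between the two collimators, since each case is planned under both conditions. A minimal sketch with made-up numbers (all values hypothetical, not from the study):

```python
from scipy import stats

# Hypothetical OAR max-dose values (Gy) for the same 5 cases planned with
# the 5 mm MLC and the 2.5 mm MLC; pairing by case motivates ttest_rel.
dose_5mm  = [12.1, 10.4, 14.8, 11.9, 13.2]
dose_25mm = [10.9,  9.8, 13.1, 11.2, 12.0]

t, p = stats.ttest_rel(dose_5mm, dose_25mm)
print(f"t = {t:.2f}, p = {p:.3f}")
```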

    Mixed Attention Network for Cross-domain Sequential Recommendation

In modern recommender systems, sequential recommendation leverages chronological user behaviors to make effective next-item suggestions, but it suffers from data sparsity, especially for new users. One promising line of work is cross-domain recommendation, which trains models on data from multiple domains to improve performance in data-scarce domains. Recently proposed cross-domain sequential recommendation models such as PiNet and DASL share a common drawback: they rely heavily on users who overlap across domains, which limits their usage in practical recommender systems. In this paper, we propose a Mixed Attention Network (MAN) with local and global attention modules to extract domain-specific and cross-domain information. First, we propose a local/global encoding layer to capture the domain-specific/cross-domain sequential patterns. Then we propose a mixed attention layer with item similarity attention, sequence-fusion attention, and group-prototype attention to capture local/global item similarity, fuse the local/global item sequences, and extract user groups across different domains, respectively. Finally, we propose a local/global prediction layer to further evolve and combine the domain-specific and cross-domain interests. Experimental results on two real-world datasets (each with two domains) demonstrate the superiority of our proposed model, and a further study illustrates that the proposed method and its components are model-agnostic and effective. The code and data are available at https://github.com/Guanyu-Lin/MAN.
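A rough sketch of the local/global split, assuming PyTorch: one self-attention encoder per domain (local) plus one shared across domains (global), with outputs fused by concatenation. The layer sizes and the concatenation fusion are assumptions for illustration; the paper's mixed attention layer adds further components on top.

```python
import torch
import torch.nn as nn

class LocalGlobalEncoder(nn.Module):
    """Toy local/global encoding layer for two domains."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.local_a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.local_b = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.global_ = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, seq_a: torch.Tensor, seq_b: torch.Tensor):
        # Domain-specific (local) self-attention per domain.
        loc_a, _ = self.local_a(seq_a, seq_a, seq_a)
        loc_b, _ = self.local_b(seq_b, seq_b, seq_b)
        # Shared (global) self-attention over the merged sequence.
        mixed = torch.cat([seq_a, seq_b], dim=1)
        glob, _ = self.global_(mixed, mixed, mixed)
        # Fuse local output with the corresponding slice of the global output.
        la = seq_a.size(1)
        return (torch.cat([loc_a, glob[:, :la]], dim=-1),
                torch.cat([loc_b, glob[:, la:]], dim=-1))

enc = LocalGlobalEncoder()
out_a, out_b = enc(torch.randn(2, 10, 64), torch.randn(2, 12, 64))
print(out_a.shape, out_b.shape)  # (2, 10, 128) (2, 12, 128)
```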

    TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models

Aligned large language models (LLMs) demonstrate exceptional capabilities in task solving, instruction following, and safety. However, the continual learning aspect of these aligned LLMs has been largely overlooked. Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs, owing both to their simplicity and to the models' potential exposure during instruction tuning. In this paper, we introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs. TRACE consists of 8 distinct datasets spanning challenging tasks including domain-specific tasks, multilingual capabilities, code generation, and mathematical reasoning. All datasets are standardized into a unified format, allowing for effortless automatic evaluation of LLMs. Our experiments show that after training on TRACE, aligned LLMs exhibit significant declines in both general ability and instruction-following capabilities. For example, the accuracy of llama2-chat 13B on the gsm8k dataset declined precipitously from 28.8% to 2% after training on our datasets. This highlights the challenge of finding a suitable tradeoff between achieving performance on specific tasks and preserving the original prowess of LLMs. Empirical findings suggest that tasks inherently equipped with reasoning paths contribute significantly to preserving certain capabilities of LLMs against potential declines. Motivated by this, we introduce the Reasoning-augmented Continual Learning (RCL) approach, which integrates task-specific cues with meta-rationales, effectively reducing catastrophic forgetting in LLMs while expediting convergence on novel tasks.
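As a sketch of the kind of decline being measured, forgetting on a held-out ability is conventionally the drop in accuracy after sequential training. The helper below assumes hypothetical evaluate and train_task functions and is not TRACE's actual evaluation code.

```python
from typing import Callable, Sequence

def forgetting(model,
               train_task: Callable[[object, object], object],
               tasks: Sequence[object],
               probe_dataset,
               evaluate: Callable[[object, object], float]) -> float:
    """Accuracy drop on a probe ability after sequential task training."""
    before = evaluate(model, probe_dataset)
    for task in tasks:                  # continual learning: tasks in order
        model = train_task(model, task)
    after = evaluate(model, probe_dataset)
    return before - after               # e.g. 28.8 - 2.0 = 26.8 points on gsm8k
```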